Results 1 - 3 of 3
1.
Transl Pediatr ; 12(11): 2030-2043, 2023 Nov 28.
Article in English | MEDLINE | ID: mdl-38130586

ABSTRACT

Background: Accurately predicting waiting time for patients is crucial for effective hospital management. The present study examined the prediction of outpatient waiting time in a Chinese pediatric hospital using machine learning algorithms. Patients informed of their expected waiting time in advance can make better-informed decisions and plan their visit on the day of admission.
Methods: First, a novel classification method for the outpatient clinics of the Chinese pediatric hospital was proposed, based on medical knowledge and statistical analysis. Subsequently, four machine learning algorithms [linear regression (LR), random forest (RF), gradient boosting decision tree (GBDT), and K-nearest neighbor (KNN)] were used to construct models predicting patient waiting time in four department categories.
Results: The other three machine learning algorithms outperformed LR in all four department categories. The optimal model for Internal Medicine Department I was the RF model, with a mean absolute error (MAE) of 5.03 minutes, 47.60% lower than that of the LR model. The optimal model for the other three categories was the GBDT model, whose MAE was 28.26%, 35.86%, and 33.10% lower, respectively, than that of the LR model.
Conclusions: Machine learning can predict outpatient waiting time in pediatric hospitals well and ease the anxiety of patients queuing without appointments. This study offers key insights into enhancing healthcare services and reaffirms the dedication of Chinese pediatric hospitals to providing efficient, patient-centric care.
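The Methods above train four regressors, one of which is a K-nearest-neighbor model, and compare them by mean absolute error. A minimal sketch of KNN regression and the MAE metric on synthetic waiting-time data (the features, values, and k are hypothetical illustrations, not the study's model or dataset):

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Predict each query's waiting time as the mean of its k nearest neighbors."""
    preds = []
    for x in X_query:
        dist = np.linalg.norm(X_train - x, axis=1)  # Euclidean distance to every training point
        nearest = np.argsort(dist)[:k]              # indices of the k closest points
        preds.append(y_train[nearest].mean())
    return np.array(preds)

def mae(y_true, y_pred):
    """Mean absolute error, in the same units as the target (here, minutes)."""
    return float(np.mean(np.abs(y_true - y_pred)))

# Hypothetical features: [queue position, hour of day]; target: waiting minutes.
X = np.array([[2.0, 9.0], [3.0, 9.5], [25.0, 10.0], [27.0, 10.5]])
y = np.array([6.0, 8.0, 41.0, 45.0])
pred = knn_predict(X, y, np.array([[2.5, 9.2], [26.0, 10.2]]), k=2)  # -> [7.0, 43.0]
error = mae(np.array([7.0, 43.0]), pred)
```

Reporting MAE in minutes, as the study does, keeps the error directly interpretable for patients and hospital staff.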

2.
Cancer Sci ; 114(2): 690-701, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36114747

ABSTRACT

Accurately predicting patient survival is essential for cancer treatment decisions. However, a prognostic prediction model based on histopathological images of stomach cancer patients has yet to be developed. We propose a deep learning-based model (MultiDeepCox-SC) that predicts overall survival in patients with stomach cancer by integrating histopathological images, clinical data, and gene expression data. MultiDeepCox-SC not only automatically selects the patches most informative for survival prediction, without manual labeling of histopathological images, but also identifies genetic and clinical risk factors associated with survival in stomach cancer. The prognostic accuracy of MultiDeepCox-SC (C-index = 0.744) surpasses that of a model based on histopathological images alone (C-index = 0.660). The risk score of our model remained an independent predictor of survival outcome after adjustment for potential confounders, including pathologic stage, grade, age, race, and gender, on The Cancer Genome Atlas dataset (hazard ratio 1.555, p = 3.53e-08) and the external test set (hazard ratio 2.912, p = 9.42e-4). Our fully automated online prognostic tool based on histopathological images, clinical data, and gene expression data could be used to improve pathologists' efficiency and accuracy (https://yu.life.sjtu.edu.cn/DeepCoxSC).


Subjects
Deep Learning; Stomach Neoplasms; Humans; Stomach Neoplasms/genetics; Prognosis; Risk Factors
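The C-index values quoted above (0.744 vs. 0.660) are concordance indices for censored survival data. For reference, Harrell's C-index can be computed directly; a minimal sketch with toy numbers (not the study's data or code):

```python
import numpy as np

def c_index(time, event, risk):
    """Harrell's concordance index for right-censored survival data.

    time:  observed follow-up times
    event: 1 if the event (death) was observed, 0 if censored
    risk:  model risk scores (higher score = worse predicted outcome)
    """
    concordant, comparable = 0.0, 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if subject i's event was observed
            # and occurred strictly before subject j's observed time.
            if event[i] == 1 and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0   # earlier event correctly ranked as higher risk
                elif risk[i] == risk[j]:
                    concordant += 0.5   # tied scores count half
    return concordant / comparable

# Toy example: four subjects, the last one censored.
times = np.array([1.0, 2.0, 3.0, 4.0])
events = np.array([1, 1, 1, 0])
good = c_index(times, events, np.array([4.0, 3.0, 2.0, 1.0]))  # perfect ranking -> 1.0
bad = c_index(times, events, np.array([1.0, 2.0, 3.0, 4.0]))   # reversed ranking -> 0.0
```

A C-index of 0.5 corresponds to random ranking, so the reported gain from 0.660 to 0.744 is measured against that chance baseline.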
3.
Comput Biol Med ; 144: 105387, 2022 05.
Article in English | MEDLINE | ID: mdl-35305502

ABSTRACT

Multi-modality magnetic resonance imaging (MRI) can reveal distinct tissue patterns in the human body and is crucial to clinical diagnosis. However, obtaining diverse and plausible multi-modality MR images remains challenging because of expense, noise, and artifacts. For the same lesion, different MRI modalities differ substantially in contextual information, coarse location, and fine structure. To achieve better generation and segmentation performance, a dual-scale multi-modality perceptual generative adversarial network (DualMMP-GAN) is proposed, based on cycle-consistent generative adversarial networks (CycleGAN). Dilated residual blocks are introduced to enlarge the receptive field while preserving the structural and contextual information of images. A dual-scale discriminator is constructed that optimizes the generator by discriminating patches at two scales, representing lesions of different sizes. A perceptual consistency loss is introduced to learn the mapping between the generated and target modalities at different semantic levels. Moreover, generative multi-modality segmentation (GMMS), which combines given modalities with generated ones, is proposed for brain tumor segmentation. Experimental results show that DualMMP-GAN outperforms CycleGAN and several state-of-the-art methods in terms of PSNR, SSIM, and RMSE in most tasks. In addition, the Dice score, sensitivity, specificity, and Hausdorff95 obtained by segmentation with GMMS are all higher than those from a single modality. The objective indices obtained by the proposed methods are close to the upper bounds obtained from real multiple modalities, indicating that GMMS can achieve effects similar to full multi-modality input. Overall, the proposed methods can serve as an effective tool in clinical brain tumor diagnosis with promising application potential.


Subjects
Brain Neoplasms; Image Processing, Computer-Assisted; Artifacts; Brain Neoplasms/diagnostic imaging; Data Collection; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging
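The abstract above scores image synthesis with PSNR, SSIM, and RMSE, and segmentation with Dice and related metrics. Two of these metrics have short closed forms; a minimal numpy sketch (illustrative values only, not the paper's evaluation code):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio (dB) between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")            # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def dice(mask_a, mask_b):
    """Dice overlap between two binary segmentation masks (1.0 = identical)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0                     # both masks empty: define as perfect overlap
    return 2.0 * np.logical_and(a, b).sum() / total

ref = np.zeros((8, 8))
noisy = ref + 0.1                      # constant error of 0.1 -> MSE = 0.01
score = psnr(ref, noisy)               # 10 * log10(1 / 0.01) = 20 dB
overlap = dice(np.array([1, 1, 0, 0]), np.array([1, 0, 0, 0]))  # 2*1 / (2+1)
```

Higher PSNR and Dice both indicate better agreement with the reference, which is why the paper reports them rising as generated modalities approach the real ones.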